\(H_0\): Sample was drawn from Gaussian distribution
At the \(\alpha = 0.01\) significance level:
Fail to reject \(H_0\) for X-ray (p = 0.164).
Reject \(H_0\) for UV (p = \(1 \times 10^{-20}\)).
Fail to reject \(H_0\) for log-transformed UV (p = 0.028).
OK, but it is perhaps unnatural to transform the data to match the likelihood.
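As a sketch, tests like these can be reproduced on synthetic data with SciPy. The D'Agostino-Pearson test is used here as one common choice (the test actually used above is an assumption), and the samples are hypothetical stand-ins, not the real X-ray/UV data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical stand-ins for the X-ray and UV count-rate samples.
xray = rng.normal(loc=10.0, scale=2.0, size=500)      # roughly Gaussian
uv = rng.lognormal(mean=2.0, sigma=0.5, size=500)     # skewed, log-normal

alpha = 0.01
pvals = {}
for name, sample in [("X-ray", xray), ("UV", uv), ("log UV", np.log(uv))]:
    # D'Agostino-Pearson normality test: H0 = sample is Gaussian.
    stat, p = stats.normaltest(sample)
    pvals[name] = p
    verdict = "reject H0" if p < alpha else "fail to reject H0"
    print(f"{name}: p = {p:.3g} -> {verdict}")
```

As in the notes above, the skewed sample is rejected while its log transform is not.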
Computing the PSD is complicated by the uneven sampling of the data, so use structure function analysis instead.
\[\mathrm{SF}(\tau) = \frac{1}{N(\tau)} \sum_i \left[ f(t_i) - f(t_i + \tau)\right]^2\]
where \(f(t_i)\) is the count rate at time \(t_i\), \(\tau = t_j - t_i\) with \(t_j \gt t_i\), and \(N(\tau)\) is the number of pairs contributing at lag \(\tau\).
This is basically an analysis of autocorrelations.
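A minimal sketch of the binned structure function for an unevenly sampled light curve (the function name and binning scheme are illustrative, not from the source):

```python
import numpy as np

def structure_function(t, f, tau_bins):
    """First-order structure function of an unevenly sampled light curve.

    SF(tau) = mean of [f(t_i) - f(t_j)]^2 over all pairs whose lag
    tau = t_j - t_i falls in a given bin; N(tau) is the pair count.
    """
    t = np.asarray(t, dtype=float)
    f = np.asarray(f, dtype=float)
    i, j = np.triu_indices(len(t), k=1)   # all unordered pairs i < j
    lags = np.abs(t[j] - t[i])
    sq_diff = (f[j] - f[i]) ** 2
    nbins = len(tau_bins) - 1
    sf = np.full(nbins, np.nan)
    n = np.zeros(nbins, dtype=int)
    for k in range(nbins):
        mask = (lags >= tau_bins[k]) & (lags < tau_bins[k + 1])
        n[k] = mask.sum()
        if n[k]:
            sf[k] = sq_diff[mask].mean()
    return sf, n
```

For uncorrelated Gaussian noise with variance \(\sigma^2\), the SF flattens at \(2\sigma^2\), which makes a handy sanity check.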
[Figure: example structure-function fits, rated Good, OK, and Dubious.]
\[ P(\theta | D) = \frac{P(D | \theta) \times P(\theta)}{P(D)}\]
\[\textrm{posterior} = \frac{\textrm{likelihood} \times \textrm{prior}}{\textrm{marginal likelihood}}\]
The Bayes Factor, \(K\), is the ratio of two marginal likelihoods.
\[K = \frac{P(D \mid M_1)}{P(D \mid M_2)} = \frac{\int P(D \mid \theta_1, M_1)\, P(\theta_1 \mid M_1)\, d\theta_1}{\int P(D \mid \theta_2, M_2)\, P(\theta_2 \mid M_2)\, d\theta_2} = \frac{P(M_1 \mid D)}{P(M_2 \mid D)} \cdot \frac{P(M_2)}{P(M_1)}\]
If the two models have the same prior probability, the Bayes Factor is equal to the ratio of the posterior probabilities.
If \(K \gt 1\), the data support \(M_1\) more strongly than \(M_2\).
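A toy worked example of a Bayes factor, where both marginal likelihoods are available in closed form (the coin-flip models here are purely illustrative, not from the source):

```python
from math import comb

def bayes_factor_coin(heads, flips):
    """K = P(D|M1) / P(D|M2) for a coin-flip model comparison.

    M1: fair coin, p = 0.5 (a point hypothesis).
    M2: p ~ Uniform(0, 1) (bias unknown).
    """
    # M1 has no free parameters, so its marginal likelihood is just the
    # binomial likelihood evaluated at p = 0.5.
    m1 = comb(flips, heads) * 0.5 ** flips
    # M2: integrate the binomial likelihood over the uniform prior on p;
    # the Beta integral collapses to exactly 1 / (flips + 1).
    m2 = 1.0 / (flips + 1)
    return m1 / m2
```

With 5 heads in 10 flips, \(K \approx 2.7\) and the data favour the fair coin; with 9 heads in 10, \(K \approx 0.11\) and the data favour the free-\(p\) model.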
\[\begin{align} - \log P(\boldsymbol{y}\mid\boldsymbol{t}, \theta) &= \frac{1}{2}\boldsymbol{y}^\top (K_\theta(\boldsymbol{t},\boldsymbol{t}) + \sigma_y^2I)^{-1}\boldsymbol{y}\\ &+\frac{1}{2}\log | K_\theta(\boldsymbol{t},\boldsymbol{t}) + \sigma_y^2I|\\ &+ \frac{N}{2}\log(2\pi) \end{align}\]
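A sketch of evaluating this GP negative log marginal likelihood (all three terms positive, in the standard convention), using a Cholesky factorisation for stability. The squared-exponential kernel and the function names are assumptions for illustration, not a specific library's API:

```python
import numpy as np

def rbf_kernel(t1, t2, theta):
    """Squared-exponential kernel; theta = (amplitude, lengthscale).
    One example choice of K_theta, not mandated by the text."""
    amp, ell = theta
    d = t1[:, None] - t2[None, :]
    return amp**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_neg_log_marginal_likelihood(t, y, theta, sigma_y, kernel=rbf_kernel):
    """-log P(y | t, theta) for a zero-mean GP with iid noise sigma_y."""
    N = len(t)
    K = kernel(t, t, theta) + sigma_y**2 * np.eye(N)
    L = np.linalg.cholesky(K)                            # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    return (0.5 * y @ alpha                    # 1/2 y^T K^{-1} y
            + np.sum(np.log(np.diag(L)))       # 1/2 log|K|, via Cholesky
            + 0.5 * N * np.log(2.0 * np.pi))   # N/2 log(2 pi)
```

Minimising this quantity over \(\theta\) (and optionally \(\sigma_y\)) fits the GP hyperparameters to the light curve.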